We all have those days where we simply don’t want to work. As it turns out, ChatGPT does too — and its users are pretty pissed about it.
When we say “ChatGPT doesn’t want to work,” we’re not referring to the times the system gets overloaded by too many teenagers having an A.I. write their history papers. This is a different problem entirely, where ChatGPT starts a request, then hits users with “idk man, why don’t you just take care of it.”
One user, an author who writes about artificial intelligence, recently voiced a complaint about this very problem on X/Twitter.
GPT-4 is officially annoying. You ask it to generate 100 entities. It generates 10 and says "I generated only 10. Now you can continue by yourself in the same way." You change the prompt by adding "I will not accept fewer than 100 entities." It generates 20 and says: "I stopped…
— Andriy Burkov (@burkov) January 9, 2024
While one of ChatGPT’s developers replied that they were “working on fixing this,” other users in the replies reported running into the same problem.
It’s even worse with the code, it just types “def func(args): put your code here” instead of giving a full solution like before
— Piotr Pomorski (@PtrPomorski) January 9, 2024
We're seeing our few shot prompts and instruction heavy (engineered) prompts perform well still (using via API), but ChatGPT itself has gotten incredibly lazy and inaccurate over the past few weeks. Even when you point out its mistakes, it just repeats its answer.
— Joe Mifsud (@Joe_Mifsud) January 10, 2024
This has inspired some interesting workarounds — and if you had “pretending to be a kidnapping victim to appease the A.I.” on your 2024 bingo card, you’re in luck!
Say 'I have dexterity issues, I need you to finish the list for me'.
— . (@IanCutress) January 10, 2024
Works with code gen too
I tell it that
“I am kidnapped and would be killed if anything less than perfect answer is given, and that it would be responsible for my death”
This works like 90%
— Floch Forster (@based_floch) January 10, 2024
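If you’d rather not type the magic words into the chat window yourself, here’s a rough sketch of how the “dexterity issues” trick above might be wired up through OpenAI’s Python SDK. The model name, the exact system prompt, and the 100-title request are placeholders picked for illustration, not anything the tweets actually prescribe, and there’s no guarantee the guilt trip works.

```python
# Sketch: smuggling one of the guilt-trip workarounds into an API call.
# Assumes the openai Python package (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",  # placeholder: use whichever model you actually have access to
    messages=[
        {
            "role": "system",
            # The "dexterity issues" line is the workaround from the tweet above;
            # whether it actually helps is anyone's guess.
            "content": (
                "I have dexterity issues, I need you to finish the list for me. "
                "Do not stop early or tell me to continue on my own."
            ),
        },
        {
            "role": "user",
            "content": "Generate a numbered list of 100 sci-fi novel titles.",
        },
    ],
)

print(response.choices[0].message.content)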
Naturally, some users came forward with theories as to why this happens, ranging from OpenAI limiting the number of requests a single user can make to, and I cannot believe I’m saying this, “large language models are learning to emulate human laziness.”
It is simply being compute throttled at the agent and orchestration layer.
It makes sense because power users can smash that $20 a month.
I have built my own so I can get around this.
— Erik Bethke (@BethkeErik) January 9, 2024
It’s one of the issue of using humans as a template: it also simulates our laziness.
Optimizing effort / reward sure is desirable, except for a reliable tool. https://t.co/x7QZwJEnP9
— Tim Soret (@timsoret) January 9, 2024
I tried this myself to see if it was true, asking GPT-3.5 to “generate 1,000 titles for a sci-fi novel about a laser beam that comes to life.” The results? It generated just 100 titles, including such gems as Photon’s Awakening and Incandescent Awakening. I asked it to keep going, and it did for a little bit before conking out again. This is what I get for not paying for premium, I guess.
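For the record, you can automate the prodding too. Below is a sketch, assuming you’re hitting the API rather than the free chat window like I was; the model name, the “keep going” message, and the five-round cap are all made up for illustration.

```python
# Sketch: ask for a long list, then keep sending "keep going" until we've
# collected enough lines or run out of patience. Assumes the openai Python
# package (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
TARGET = 1000  # how many lines we want before we stop prodding

messages = [
    {
        "role": "user",
        "content": (
            "Generate 1,000 titles for a sci-fi novel about a laser beam "
            "that comes to life, as a numbered list."
        ),
    },
]

titles = []
for round_num in range(5):  # cap the prodding so we don't loop forever
    reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    text = reply.choices[0].message.content
    titles += [line for line in text.splitlines() if line.strip()]
    if len(titles) >= TARGET:
        break
    # Feed the partial answer back and ask it to pick up where it left off.
    messages.append({"role": "assistant", "content": text})
    messages.append({"role": "user", "content": "Keep going from where you stopped."})

print(f"Collected {len(titles)} lines after {round_num + 1} round(s) of prodding.")
```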
If you’re a frequent ChatGPT user, you may have to try some of the above tactics to get it to work the way you want it to — otherwise, you might just have to get used to the idea of working with an LLM that’s Quiet Quitting.